138 research outputs found

    Road Segmentation in SAR Satellite Images with Deep Fully-Convolutional Neural Networks

    Remote sensing is extensively used in cartography. As transportation networks grow and change, extracting roads automatically from satellite images is crucial for keeping maps up to date. Synthetic Aperture Radar (SAR) satellites can provide high-resolution topographical maps. However, roads are difficult to identify in these data because they look visually similar to other targets such as rivers and railways. Most road extraction methods for SAR images still rely on a prior segmentation performed by classical computer vision algorithms, and few works study the potential of deep learning techniques despite their successful application to optical imagery. This letter presents an evaluation of Fully-Convolutional Neural Networks (FCNs) for road segmentation in SAR images. We study the relative performance of early and state-of-the-art networks after carefully enhancing their sensitivity towards thin objects by adding spatial tolerance rules. Our models show promising results, successfully extracting most of the roads in our test dataset. This shows that, although FCNs natively lack efficiency for road segmentation, they are capable of good results if properly tuned. As segmentation quality does not scale well with increasing network depth, the design of specialized architectures for road extraction should yield better performance.
    Comment: 5 pages, accepted for publication in IEEE Geoscience and Remote Sensing Letters
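    The "spatial tolerance rules" mentioned for thin objects are typically a relaxed scoring: a predicted road pixel counts as correct if it lies within a few pixels of any ground-truth road pixel. A minimal numpy sketch of such a relaxed precision (function name and tolerance metric are illustrative, not taken from the letter):

    ```python
    import numpy as np

    def relaxed_precision(pred, truth, tol):
        """Fraction of predicted road pixels lying within `tol` pixels
        (Chebyshev distance) of any ground-truth road pixel."""
        pred_pts = np.argwhere(pred.astype(bool))
        truth_pts = np.argwhere(truth.astype(bool))
        if len(pred_pts) == 0:
            return 1.0            # nothing predicted, nothing wrong
        if len(truth_pts) == 0:
            return 0.0
        # pairwise Chebyshev distances, then nearest truth pixel per prediction
        d = np.abs(pred_pts[:, None, :] - truth_pts[None, :, :]).max(-1).min(-1)
        return float((d <= tol).mean())

    truth = np.zeros((5, 5)); truth[2, :] = 1   # true road along row 2
    pred = np.zeros((5, 5)); pred[3, :] = 1     # prediction off by one row
    print(relaxed_precision(pred, truth, tol=0))  # strict match: 0.0
    print(relaxed_precision(pred, truth, tol=1))  # one-pixel tolerance: 1.0
    ```

    With a one-pixel tolerance, a road traced one row off still scores perfectly, which is exactly what makes the metric (and a loss built on it) forgiving towards thin, slightly misplaced structures.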

    Collision Avoidance of multiple MAVs using a multiple Outputs to Input Saturation Technique

    This paper proposes a novel collision avoidance scheme for MAVs. The scheme builds on a recently developed technique that transforms state constraints into time-varying control input saturations. Here, this technique is extended to ensure collision avoidance for a formation of up to three MAVs. Experimental results involving three A.R. Drones show the efficiency of the approach.
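    The core idea, turning a state constraint into a saturation on the input, can be illustrated with a 1-D toy model (all numbers and the square-root braking bound are my illustrative choices, not the paper's control law): a separation constraint d ≥ d_min becomes a time-varying upper bound on commanded velocity, so the vehicle never moves faster than it could brake before the safety distance.

    ```python
    import math

    # 1-D toy: a vehicle at position x flies toward a fixed neighbour at x_obs.
    # The state constraint d >= d_min is enforced by saturating the commanded
    # velocity with the maximum speed from which a full brake still stops
    # before the safety margin.
    a_max, d_min, dt = 2.0, 1.0, 0.01    # braking accel, safety margin, step
    x, x_obs, v_cmd = 0.0, 10.0, 3.0     # start, neighbour, pilot command

    min_d = float("inf")
    for _ in range(1000):                # 10 s of simulated flight
        d = x_obs - x
        v_allow = math.sqrt(2.0 * a_max * max(d - d_min, 0.0))  # time-varying bound
        v = min(v_cmd, v_allow)          # saturate the input, not the state
        x += v * dt
        min_d = min(min_d, x_obs - x)

    print(min_d)
    ```

    The pilot's command is untouched while far away, and the saturation only bites near the obstacle; the separation then converges to d_min (up to discretization error) instead of being violated.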

    Road condition assessment from aerial imagery using deep learning

    Terrestrial sensors are commonly used to inspect and document the condition of roads at regular intervals and according to defined rules. In Germany, for example, extensive data and information are obtained, stored in the Federal Road Information System, and made available in particular for deriving necessary decisions. Transverse and longitudinal evenness, for instance, are recorded by vehicles using laser techniques. To detect damage to the road surface, images are captured and recorded using area or line scan cameras. All these methods provide very accurate information about the condition of the road, but are time-consuming and costly. Aerial imagery (e.g. multi- or hyperspectral, SAR) provides an additional possibility for acquiring the specific parameters describing the condition of roads, yet a direct transfer from objects extractable from aerial imagery to the objects or parameters that determine the condition of the road is difficult and in some cases impossible. In this work, we investigate the transferability of objects commonly used for terrestrial-based assessment of road surfaces to an aerial image-based assessment. In addition, we generated a suitable dataset and developed a deep learning based image segmentation method capable of extracting two relevant road condition parameters from high-resolution multispectral aerial imagery, namely cracks and working seams. The obtained results show that our models are able to extract these thin features from aerial images, indicating the possibility of using more automated approaches for road surface condition assessment in the future.
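    A two-class segmentation like this (cracks and working seams) is usually scored per class, since the thin crack class behaves very differently from the background. A small sketch of per-class intersection-over-union; the label convention (0 = intact surface, 1 = crack, 2 = working seam) is mine, for illustration:

    ```python
    import numpy as np

    def class_iou(pred, truth, cls):
        """Intersection-over-union for one class of a segmentation map."""
        p, t = (pred == cls), (truth == cls)
        union = np.logical_or(p, t).sum()
        if union == 0:
            return 1.0            # class absent in both maps
        return float(np.logical_and(p, t).sum() / union)

    truth = np.array([[0, 1, 1, 0],
                      [0, 2, 2, 2]])
    pred = np.array([[0, 1, 0, 0],
                     [0, 2, 2, 0]])
    print(class_iou(pred, truth, 1))  # crack IoU: 1/2 = 0.5
    print(class_iou(pred, truth, 2))  # working seam IoU: 2/3
    ```

    Reporting the two classes separately keeps a model from looking good merely because the dominant intact-surface class is easy.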

    Building Section Instance Segmentation with Combined Classical and Deep Learning Methods

    In big cities, the complexity of urban infrastructure is very high. In city centers, one construction can consist of several building sections with different heights or roof geometries. Most existing approaches detect those buildings as a single construction, either as binary building segmentation maps or as one instance in object-oriented segmentation. However, reconstructing complex buildings consisting of several parts requires a higher level of detail. In this work, we present a methodology for instance segmentation of individual building sections in satellite imagery. We show that fully convolutional networks (FCNs) can tackle the issue much better than the state-of-the-art Mask R-CNN. A ground truth raster image with pixel value 1 for building sections and 2 for their touching borders was generated to train models to predict both classes as a semantic output. The semantic outputs were then post-processed with morphology and watershed labeling to generate an instance-level segmentation. The combination of a deep learning-based approach and a classical image processing algorithm allowed us to fulfill the segmentation task at the instance level and reach high-quality results with an mAP of up to 42%.
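    The post-processing step can be sketched in pure numpy: connected-component labeling of the section class, then growing the labels into the predicted touching-border pixels. This uses flood fill plus iterative growth as a simplified stand-in for the morphology + watershed step described above:

    ```python
    import numpy as np

    def label_sections(sem):
        """Turn a semantic map (1 = building section, 2 = touching border)
        into instance labels: connected components of class 1, then border
        pixels absorbed by a neighbouring instance."""
        h, w = sem.shape
        labels = np.zeros((h, w), dtype=int)
        n = 0
        for i, j in np.argwhere(sem == 1):
            if labels[i, j]:
                continue
            n += 1
            stack = [(i, j)]
            labels[i, j] = n
            while stack:                      # flood fill one component
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and sem[ny, nx] == 1 and not labels[ny, nx]:
                        labels[ny, nx] = n
                        stack.append((ny, nx))
        changed = True                        # grow labels into border pixels
        while changed:
            changed = False
            for i, j in np.argwhere((sem == 2) & (labels == 0)):
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = i + dy, j + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx]:
                        labels[i, j] = labels[ny, nx]
                        changed = True
                        break
        return labels, n

    # two sections separated by a predicted touching border (column 2)
    sem = np.array([[1, 1, 2, 1, 1],
                    [1, 1, 2, 1, 1]])
    labels, n = label_sections(sem)
    print(n)  # 2 instances
    ```

    Without the explicit border class, the two sections would merge into a single connected component, which is exactly the failure mode the two-class ground truth is designed to prevent.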

    UAV Obstacle Avoidance Scheme Using an Output to Input Saturation Transformation Technique

    This paper presents a novel obstacle avoidance scheme for UAVs. The scheme builds on a technique recently developed by one of the authors, which transforms a state constraint into an input saturation. For obstacle avoidance, this saturation is designed to ensure a safe trajectory around the obstacles, and a proof of this desired behavior is provided. A low-cost RGB-D sensor is used to detect obstacles, as its measurements of the environment are easily interpreted even with a low-power embedded processor. Experimental results are provided, together with a simulation, to demonstrate the efficiency of the approach.
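    What makes a depth sensor "effortlessly interpreted" here is that the avoidance saturation only needs a single scalar per frame: the distance to the nearest obstacle in the flight direction. A hypothetical sketch (window size and frame contents are mine) of extracting that scalar from a depth image:

    ```python
    import numpy as np

    # Hypothetical 8x8 depth frame in metres (0 = invalid / no return).
    depth = np.full((8, 8), 5.0)
    depth[3:5, 3:5] = 1.2          # obstacle ahead
    depth[0, 0] = 0.0              # invalid pixel, must be ignored

    def nearest_obstacle(depth, window=4):
        """Minimum valid depth in a central window of the frame -- the single
        scalar the avoidance saturation needs from the RGB-D sensor."""
        h, w = depth.shape
        cy, cx = h // 2, w // 2
        roi = depth[cy - window // 2: cy + window // 2,
                    cx - window // 2: cx + window // 2]
        valid = roi[roi > 0]
        return float(valid.min()) if valid.size else float("inf")

    print(nearest_obstacle(depth))  # 1.2
    ```

    A minimum over a central window is cheap enough for a low-power embedded processor, which is the practical argument for RGB-D over heavier vision pipelines.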

    SAR-to-Optical Image Translation Based on Conditional Generative Adversarial Networks - Optimization, Opportunities and Limits

    Due to its day-and-night imaging capability, synthetic aperture radar (SAR) remote sensing plays an important role in Earth observation. The ability to interpret the data is limited, even for experts, as the human eye is not familiar with the effects of distance-dependent imaging, signal intensities detected in the radar spectrum, or image characteristics related to speckle and post-processing steps. This paper is concerned with machine learning for SAR-to-optical image-to-image translation in order to support the interpretation and analysis of the original data. A conditional adversarial network is adopted and optimized to generate alternative SAR image representations, trained on pairs of SAR images (starting point) and optical images (reference). Following this strategy, the focus is set on the value of empirical knowledge for initialization, the impact of the results on follow-up applications, and the opportunities and drawbacks of this application of deep learning. Case study results are shown for high-resolution (SAR: TerraSAR-X, optical: ALOS PRISM) and low-resolution (Sentinel-1 and -2) data. The properties of the alternative image representation are evaluated based on feedback from experts in SAR remote sensing and on the impact on road extraction as an example of a follow-up application. The results provide a basis for explaining fundamental limitations affecting the SAR-to-optical image translation idea, but also indicate benefits of alternative SAR image representations.
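    Conditional adversarial networks for paired image translation typically train the generator on an adversarial term plus an L1 term toward the reference image. A minimal numpy sketch of such a generator objective (the function name and the lambda value are illustrative, not the paper's exact formulation):

    ```python
    import numpy as np

    def generator_loss(d_fake, fake, target, lam=100.0):
        """Conditional-GAN-style generator objective: fool the discriminator
        (binary cross-entropy against the 'real' label) plus a
        lambda-weighted L1 distance to the optical reference."""
        eps = 1e-12
        adv = float(-np.mean(np.log(d_fake + eps)))   # BCE vs. label 1
        l1 = float(np.mean(np.abs(fake - target)))    # pixel fidelity term
        return adv + lam * l1

    fake = target = np.zeros((2, 2))     # perfect translation: L1 term is 0
    d_fake = np.array([0.5])             # an undecided discriminator
    loss = generator_loss(d_fake, fake, target)
    print(loss)  # only the adversarial part remains: -ln(0.5)
    ```

    The heavy L1 weight is what keeps the translated image anchored to the reference; the adversarial term alone would let the generator drift toward plausible but unfaithful optical textures, one of the fundamental limits discussed above.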

    Detecting and Estimating On-street Parking Areas from Aerial Images

    Parking is an essential part of transportation systems and urban planning, but the availability of data on parking is limited, which poses problems, for example, when estimating search times for parking spaces in travel demand models. This paper presents an on-street parking area prediction model developed using remote sensing and open geospatial data for the German city of Brunswick. Neural networks are used to segment the aerial images into parking and street areas. To enhance the robustness of this detection, multiple predictions over the same regions are fused. We enrich this information with publicly available data and formulate a Bayesian inference model to predict the parking area per street meter. The model is estimated and validated using parking areas detected in the aerial images. We find that the prediction accuracy of the parking area model is good at mid to high levels of parking area per street meter, but uncertainty increases at lower levels. Using a Bayesian inference model allows the uncertainty of the prediction to be passed on to subsequent applications to track error propagation. Since only open-source data serve as input to the prediction model, a transfer to structurally similar regions for which no aerial images are available is possible. The model can be used in a wide range of applications such as travel demand models, parking regulation, and urban planning.
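    The appeal of a Bayesian formulation is that fusing several detections over the same street yields a posterior with an explicit uncertainty that downstream models can consume. A conjugate-normal sketch of that update (all priors, noise levels, and observations are illustrative numbers, not the paper's):

    ```python
    import numpy as np

    # Conjugate normal update: prior belief about parking area per street
    # metre (e.g. from open geodata) refined by areas detected in several
    # overlapping aerial images of the same street segment.
    mu0, sigma0 = 2.0, 1.0           # prior mean / std (m^2 per street metre)
    sigma = 1.0                      # assumed detection noise std
    obs = np.array([3.0, 3.0, 3.0])  # detections from 3 overlapping images

    tau0, tau = 1.0 / sigma0**2, 1.0 / sigma**2   # precisions
    tau_n = tau0 + len(obs) * tau                 # posterior precision
    mu_n = (tau0 * mu0 + tau * obs.sum()) / tau_n # posterior mean
    sigma_n = tau_n ** -0.5                       # posterior std, passed downstream

    print(mu_n)     # 2.75: pulled from the prior toward the detections
    print(sigma_n)  # 0.5: tighter than prior and single-image noise
    ```

    The posterior standard deviation is exactly the quantity that lets subsequent applications, such as travel demand models, track error propagation instead of treating the predicted area as exact.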